
    Assessment of information-driven decision-making in the SME

    The use of analytics in decision-making processes is a key element for organizations to be competitive. However, experience indicates that many organizations still have not managed to fully understand how to properly use the available data for diagnosing, improving and controlling processes, or for modelling, predicting and discovering business opportunities. This situation is even more pronounced among small and medium enterprises (SMEs). An essential first step for SMEs to start using analytics is a correct assessment of their decision-making processes and use of data. This will help them understand their current situation, see the potential of adopting analytical practices and decide on their approach to analytics. The assessment we propose is therefore managerial and strategic; it is not aimed at detecting problems such as errors in the data used to make an invoice, not having the correct version of a drawing in the shop, or a wrong date in a project plan... Undoubtedly, these issues are very important, but they are not the objective. The results from applying the proposed assessment tool in several pilot SMEs are expected to serve as the basis for improving the tool and developing a maturity model and a roadmap for improving their proficiency in information-driven decision-making.

    REPP-H: runtime estimation of power and performance on heterogeneous data centers

    Modern data centers increasingly demand improved performance with minimal power consumption. Managing the power and performance requirements of the applications is challenging because these data centers, incidentally or intentionally, have to deal with server architecture heterogeneity [19], [22]. One critical challenge that data centers have to face is how to manage system power and performance given the different application behavior across multiple architectures. This work has been supported by the EU FP7 program (Mont-Blanc 2, ICT-610402), by the Ministerio de Economia (CAP-VII, TIN2015-65316-P), and the Generalitat de Catalunya (MPEXPAR, 2014-SGR-1051). The material herein is based in part upon work supported by the US NSF, grant numbers ACI-1535232 and CNS-1305220.

    Efficient heuristics for the parallel blocking flow shop scheduling problem

    We consider the NP-hard problem of scheduling n jobs in F identical parallel flow shops, each consisting of a series of m machines, and doing so with a blocking constraint. The applied criterion is to minimize the makespan, i.e., the maximum completion time of all the jobs in the F flow shops (lines). The Parallel Flow Shop Scheduling Problem (PFSP) is conceptually similar to another problem known in the literature as the Distributed Permutation Flow Shop Scheduling Problem (DPFSP), which allows modeling the scheduling process in companies with more than one factory, each factory with a flow shop configuration. Therefore, the proposed methods can solve the scheduling problem under the blocking constraint in both situations, which, to the best of our knowledge, has not been studied previously. In this paper, we propose a mathematical model along with some constructive and improvement heuristics to solve the parallel blocking flow shop problem (PBFSP) and thus minimize the maximum completion time among lines. The proposed constructive procedures use two approaches that are totally different from those proposed in the literature. These methods are used as initial solution procedures of an iterated local search (ILS) and an iterated greedy algorithm (IGA), both of which are combined with a variable neighborhood search (VNS). The proposed constructive procedures and the improvement methods take into account the characteristics of the problem. The computational evaluation demonstrates that both of them (especially the IGA) perform considerably better than the algorithms adapted from the DPFSP literature.
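    As a concrete illustration of what such heuristics evaluate, the sketch below computes the makespan of a parallel blocking flow shop using the standard departure-time recurrence for permutation flow shops with blocking. The job-to-line assignment and processing times are invented for illustration and are not taken from the paper.

```python
# Sketch: makespan of a parallel blocking flow shop for a given assignment of
# job sequences to lines. Uses the standard departure-time recurrence for
# permutation flow shops with blocking; the data below is illustrative only.

def blocking_makespan(seq, p):
    """seq: job indices in processing order; p[j][k]: time of job j on machine k."""
    m = len(p[seq[0]])
    prev = [0.0] * (m + 1)            # departure times of the previous job
    for j in seq:
        d = [0.0] * (m + 1)
        d[0] = prev[1]                # job may enter machine 1 once it is freed
        for k in range(1, m):
            # finish processing, but stay (block) until the next machine is free
            d[k] = max(d[k - 1] + p[j][k - 1], prev[k + 1])
        d[m] = d[m - 1] + p[j][m - 1]  # the last machine never blocks
        prev = d
    return prev[m]

def parallel_blocking_makespan(lines, p):
    """lines: one job sequence per flow shop line; makespan is the max over lines."""
    return max(blocking_makespan(seq, p) for seq in lines if seq)

# Illustrative example: 4 jobs, 3 machines, 2 identical lines.
p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [1, 3, 2]]
print(parallel_blocking_makespan([[0, 2], [1, 3]], p))
```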

    RePP-C: runtime estimation of performance-power with workload consolidation in CMPs

    Configuration of hardware knobs in multicore environments for meeting performance-power demands constitutes a desirable feature in modern data centers. At the same time, high energy efficiency (performance per watt) requires optimal thread-to-core assignment. In this paper, we present the runtime estimator (RePP-C) for performance-power, characterized by processor frequency states (P-states), a wide range of sleep intervals (C-states) and workload consolidation. We also present a schema for frequency- and contention-aware thread-to-core assignment (FACTS) which considers various thread demands. The proposed solution (RePP-C) selects a given hardware configuration for each active core to ensure that the performance-power demands are satisfied, while using the scheduling schema (FACTS) for mapping threads to cores. Our results show that FACTS improves over other state-of-the-art schedulers like Distributed Intensity Online (DIO) and the native Linux scheduler by 8.25% and 37.56% in performance, with a simultaneous improvement in energy efficiency of 6.2% and 14.17%, respectively. Moreover, we prove the usability of RePP-C by predicting performance and power for 7 different types of workloads and 10 different QoS targets. The results show an average error of 7.55% and 8.96% (with 95% confidence interval) when predicting energy and performance, respectively. This work has been partially supported by the European Union FP7 program through the Mont-Blanc-2 project (FP7-ICT-610402), by the Ministerio de Economia y Competitividad under contract Computacion de Altas Prestaciones VII (TIN2015-65316-P), and by the Departament d’Innovacio, Universitats i Empresa de la Generalitat de Catalunya, under project MPEXPAR: Models de Programacio i Entorns d’Execucio Paral.lels (2014-SGR-1051).
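    The sketch below illustrates, in a simplified way, the idea of selecting a per-core hardware configuration that meets a performance target at minimal power. The linear performance/power model and the P-state table are assumptions made for illustration; this is not the RePP-C estimator or the FACTS scheduler from the paper.

```python
# Sketch of the configuration-selection idea: pick the per-core P-state
# combination with the lowest power whose predicted performance still meets
# the QoS target. The model and numbers below are illustrative assumptions.
from itertools import product

P_STATES = [(1.2, 6.0), (1.8, 9.5), (2.4, 14.0)]   # (GHz, watts) per core, assumed

def predict(config, perf_per_ghz):
    """Toy linear model: performance scales with frequency, power is additive."""
    perf = sum(f * perf_per_ghz[i] for i, (f, _) in enumerate(config))
    power = sum(w for _, w in config)
    return perf, power

def select_config(perf_per_ghz, qos_target):
    """Exhaustively search per-core P-states for the cheapest feasible config."""
    best = None
    for config in product(P_STATES, repeat=len(perf_per_ghz)):
        perf, power = predict(config, perf_per_ghz)
        if perf >= qos_target and (best is None or power < best[1]):
            best = (config, power)
    return best

# Two cores with different per-GHz throughput (e.g. due to contention).
print(select_config([1.0, 0.7], qos_target=3.0))
```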

    Parallel programming issues and what the compiler can do to help

    Twenty-first century parallel programming models are becoming really complex due to the diversity of architectures they need to target (Multi- and Many-cores, GPUs, FPGAs, etc.). What if we could use one programming model to rule them all, one programming model to find them, one programming model to bring them all and in the darkness bind them, in the land of MareNostrum where the Applications lie. The OmpSs programming model is an attempt to do so, by means of compiler directives. Compilers are essential tools to exploit applications and the architectures they run on. In this sense, compiler analysis and optimization techniques have been widely studied in order to produce better-performing and less resource-consuming code. In this paper we present two uses of several analyses we have implemented in the Mercurium [3] source-to-source compiler: a) the first use is to help users with correctness hints regarding the usage of OpenMP and OmpSs tasks; b) the second use is to enable executing OpenMP in embedded systems with very little memory, thanks to computing the Task Dependency Graph of the application at compile time. We also present the next steps of our work: a) extending range analysis for analyzing OpenMP and OmpSs recursive applications, and b) modeling applications using the OmpSs and future OpenMP 4.1 task priorities feature.
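    A minimal sketch of the idea behind deriving a Task Dependency Graph from task data clauses is shown below. The task names and in/out dependences are invented, and anti-dependences are ignored for brevity; this is not the Mercurium implementation.

```python
# Sketch of how a task dependency graph (TDG) can be derived statically from
# task in/out data clauses, in the spirit of OmpSs/OpenMP depend clauses.
# Tasks and dependences below are illustrative, not taken from the paper.

def build_tdg(tasks):
    """tasks: list of (name, ins, outs); returns edges (producer -> consumer)."""
    edges = []
    last_writer = {}                  # data item -> task that last wrote it
    for name, ins, outs in tasks:
        for item in ins:              # read-after-write dependence
            if item in last_writer:
                edges.append((last_writer[item], name))
        for item in outs:             # write-after-write dependence
            if item in last_writer:
                edges.append((last_writer[item], name))
            last_writer[item] = name
    return edges

tasks = [
    ("init_a", [], ["a"]),
    ("init_b", [], ["b"]),
    ("compute", ["a", "b"], ["c"]),
    ("output", ["c"], []),
]
print(build_tdg(tasks))
```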

    Inception: we need to go wider


    Using shared-data localization to reduce the cost of inspector-execution in unified-parallel-C programs

    Programs written in the Unified Parallel C (UPC) language can access any location of the entire local and remote address space via read/write operations. However, UPC programs that contain fine-grained shared accesses can exhibit performance degradation. One solution is to use the inspector-executor technique to coalesce fine-grained shared accesses into larger remote access operations. A straightforward implementation of the inspector-executor transformation results in excessive instrumentation that hinders performance. This paper addresses this issue and introduces various techniques that aim at reducing the generated instrumentation code: a shared-data localization transformation based on Constant-Stride Linear Memory Descriptors (CSLMADs), the inlining of data locality checks and the usage of an index vector to aggregate the data. Finally, the paper introduces a lightweight loop code motion transformation to privatize shared scalars that were propagated through the loop body. A performance evaluation, using up to 2048 cores of a POWER 775, explores the impact of each optimization and characterizes the overheads of UPC programs. It also shows that the presented optimizations increase the performance of UPC programs up to 1.8x their hand-optimized UPC counterpart for applications with regular accesses and up to 6.3x for applications with irregular accesses.
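    The sketch below illustrates the general inspector-executor pattern the paper builds on: an inspector pass records which shared indices a loop will touch, the accesses are coalesced into one bulk transfer, and an executor pass runs the loop body on the privatized copy. The fetch_bulk helper and the access pattern are assumptions standing in for UPC runtime calls.

```python
# Sketch of the inspector-executor pattern: inspect accesses, coalesce them
# into one bulk transfer, then execute the loop on the local copy.
# fetch_bulk() is an assumed stand-in for a coalesced remote get.

def fetch_bulk(shared_array, indices):
    """Assumed stand-in for a coalesced remote transfer (e.g. a bulk get)."""
    return {i: shared_array[i] for i in indices}

def inspector(index_expr, n):
    """Collect the distinct shared-array indices the loop body will read."""
    return sorted({index_expr(i) for i in range(n)})

def executor(local_copy, index_expr, n):
    """Run the original loop body against the privatized (local) data."""
    return [local_copy[index_expr(i)] * 2 for i in range(n)]

shared_array = list(range(100, 200))          # plays the role of shared data
index_expr = lambda i: (3 * i) % 50           # irregular access pattern (assumed)
idx = inspector(index_expr, 20)
local = fetch_bulk(shared_array, idx)         # one coalesced transfer
print(executor(local, index_expr, 20))
```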

    Reducing variability of a critical dimension


    Application of Kansei Engineering to Design an Industrial Enclosure

    Kansei Engineering (KE) is a technique used to incorporate emotions into the product design process. Its basic purpose is discovering in which way some properties of a product convey certain emotions to its users. It is a quantitative method, and data is typically collected using questionnaires. The Japanese researcher Mitsuo Nagamachi is the founder of Kansei Engineering. Products where KE has been successfully applied include cars, phones, packaging, house appliances, clothes and websites, among others. Kansei Engineering studies typically follow a model with three main steps: (1) spanning the semantic space: defining the responses, i.e., the emotions that will be studied; (2) spanning the space of properties: deciding on the technical properties of the products that can be freely changed and that might affect the responses (factors in a DOE factorial design); and (3) the synthesis phase, where both spaces are linked (that is, discovering how each factor affects each response). We claimed that KE is a good example of what Roger W. Hoerl and Ron Snee call statistical engineering: focusing not on advancing statistics by developing new techniques or fine-tuning existing ones, but on how current techniques can best be used in a new area. This presentation is a practical application of the ideas presented there to the design of electrical enclosures. The paper shows how well-known statistical methods (DOE, principal component analysis and regression analysis) are used together, in conjunction with other non-statistical techniques and in the presence of practical real-world restrictions, to discover how different technical characteristics of the enclosures affect the selected “emotions”.
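    As a minimal illustration of the synthesis step, the sketch below fits an ordinary least-squares model linking two hypothetical enclosure properties to a rated emotion. The factor names and questionnaire scores are invented for illustration and do not come from the study.

```python
# Minimal sketch of the synthesis step: relate technical factors of a design
# (two hypothetical enclosure properties) to a rated emotion via ordinary
# least squares. All data and factor names are invented for illustration.
import numpy as np

# Hypothetical 2^2 factorial design: rounded_corners (-1/+1), matte_finish (-1/+1)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
# Mean questionnaire score for the emotion "robust" at each design point (invented)
y = np.array([3.1, 3.4, 4.2, 4.8])

# Add an intercept column and solve the least-squares problem
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["intercept", "rounded_corners", "matte_finish"], coef.round(3))))
```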